Results 1 - 20 of 84
2.
Article in English | MEDLINE | ID: mdl-38526613

ABSTRACT

PURPOSE: Efficient and precise surgical skills are essential to ensuring positive patient outcomes. By continuously providing real-time, data-driven, and objective evaluation of surgical performance, automated skill assessment has the potential to greatly improve surgical skill training. Whereas machine learning-based surgical skill assessment is gaining traction for minimally invasive techniques, the same cannot be said for open surgery skills. Open surgery generally has more degrees of freedom than minimally invasive surgery, making it more difficult to interpret. In this paper, we present novel approaches to skill assessment for open surgery. METHODS: We analyzed a novel video dataset for open suturing training. We provide a detailed analysis of the dataset and define evaluation guidelines using state-of-the-art deep learning models. Furthermore, we present novel benchmarking results for surgical skill assessment in open suturing. The models are trained to classify a video into three skill levels based on the global rating score. To obtain initial results for video-based surgical skill classification, we benchmarked a temporal segment network with both an I3D and a Video Swin backbone on this dataset. RESULTS: The dataset is composed of 314 videos of approximately five minutes each. Model benchmarking results reach an accuracy and F1 score of up to 75% and 72%, respectively. This is similar to the performance achieved by the individual raters, in terms of inter-rater agreement and rater variability. We present the first end-to-end trained approach to skill assessment for open surgery training. CONCLUSION: We provide a thorough analysis of a new dataset as well as novel benchmarking results for surgical skill assessment. This opens the door to new advances in skill assessment by enabling video-based skill assessment for classic surgical techniques, with the potential to improve surgical outcomes for patients.
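As an illustration of the temporal segment network (TSN) idea mentioned above, here is a minimal sketch of segment-based frame selection; the function name and the center-sampling choice are illustrative assumptions, not details taken from the paper:

```python
def sample_segment_indices(num_frames: int, num_segments: int) -> list:
    """Split a video of num_frames frames into num_segments equal temporal
    segments and pick the center frame of each, as a TSN typically does at
    test time; the sampled snippets are classified and their scores averaged."""
    seg_len = num_frames / num_segments
    return [int(i * seg_len + seg_len / 2) for i in range(num_segments)]

# A five-minute video at 25 fps, covered by 8 evenly spread snippets:
indices = sample_segment_indices(300 * 25, 8)
```

Sparse sampling like this is what lets such networks cover a five-minute suturing video without processing every frame.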

3.
BMC Med Educ ; 24(1): 250, 2024 Mar 07.
Article in English | MEDLINE | ID: mdl-38500112

ABSTRACT

OBJECTIVE: The gold standard of oral cancer (OC) treatment is diagnostic confirmation by biopsy followed by surgical treatment. However, studies have shown that dentists have difficulty performing biopsies, dental students lack knowledge about OC, and surgeons do not always maintain a safe margin during tumor resection. To address this, biopsies and resections could be trained under realistic conditions outside the patient. The aim of this study was to develop and validate a porcine pseudotumor model of the tongue. METHODS: An interdisciplinary team reflecting the various specialties involved in the treatment of head and neck cancer developed a porcine pseudotumor model of the tongue on which biopsies and resections can be practiced. The refined model was validated in a final trial of 10 participants who each resected four pseudotumors on a tongue, resulting in a total of 40 resected pseudotumors. The participants (7 residents and 3 specialists) had experience in OC treatment ranging from 0.5 to 27 years. Resection margins (minimum and maximum) were assessed macroscopically and compared, together with self-assessed margins and resection time, between residents and specialists. Furthermore, the model was evaluated using Likert-type questions on haptic and radiological fidelity, its usefulness as a training model, and its imageability using CT and ultrasound. RESULTS: The model haptically resembles OC (3.0 ± 0.5; 4-point Likert scale), can be visualized with medical imaging, and can be evaluated macroscopically immediately after resection, providing feedback. Although participants (3.2 ± 0.4) tended to agree that they had resected the pseudotumor with an ideal safety margin (10 mm), the mean minimum resection margin was insufficient at 4.2 ± 1.2 mm (mean ± SD), comparable to margins reported in the literature. Simultaneously, a maximum resection margin of 18.4 ± 6.1 mm was measured, indicating partial over-resection. Although specialists were faster at resection (p < 0.001), this had no effect on margins (p = 0.114). Overall, the model was well received by the participants, who could see it being implemented in training (3.7 ± 0.5). CONCLUSION: The model, which is cost-effective, cryopreservable, and provides a risk-free training environment, is ideal for training in OC biopsy and resection and could be incorporated into dental, medical, or surgical oncology curricula. Future studies should evaluate the long-term training effects of using this model and its potential impact on improving patient outcomes.


Subject(s)
Margins of Excision , Mouth Neoplasms , Animals , Humans , Biopsy , Cadaver , Head , Mouth Neoplasms/surgery , Mouth Neoplasms/pathology , Swine
4.
Med Image Anal ; 94: 103143, 2024 May.
Article in English | MEDLINE | ID: mdl-38507894

ABSTRACT

Nuclei detection and segmentation in hematoxylin and eosin-stained (H&E) tissue images are important clinical tasks, crucial for a wide range of applications. However, they are challenging due to nuclei variance in staining and size, overlapping boundaries, and nuclei clustering. While convolutional neural networks have been extensively used for this task, we explore the potential of Transformer-based networks in combination with large-scale pre-training in this domain. Therefore, we introduce a new method for automated instance segmentation of cell nuclei in digitized tissue samples using a deep learning architecture based on Vision Transformers, called CellViT. CellViT is trained and evaluated on the PanNuke dataset, one of the most challenging nuclei instance segmentation datasets, consisting of nearly 200,000 nuclei annotated into 5 clinically important classes across 19 tissue types. We demonstrate the superiority of large-scale in-domain and out-of-domain pre-trained Vision Transformers by leveraging the recently published Segment Anything Model and a ViT encoder pre-trained on 104 million histological image patches, achieving state-of-the-art nuclei detection and instance segmentation performance on the PanNuke dataset with a mean panoptic quality of 0.50 and an F1 detection score of 0.83. The code is publicly available at https://github.com/TIO-IKIM/CellViT.
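For readers unfamiliar with the panoptic quality (PQ) metric reported above, a small sketch of how it combines detection and segmentation quality; the function name and input layout are illustrative assumptions:

```python
def panoptic_quality(matched_ious, num_fp, num_fn):
    """PQ = (sum of IoUs over matched nucleus pairs) / (TP + FP/2 + FN/2).
    matched_ious: IoU of each matched (predicted, annotated) nucleus pair,
    i.e. the true positives; num_fp / num_fn count unmatched predictions
    and unmatched annotations, respectively."""
    num_tp = len(matched_ious)
    denom = num_tp + 0.5 * num_fp + 0.5 * num_fn
    return sum(matched_ious) / denom if denom else 0.0
```

A PQ of 0.50 thus reflects both how many nuclei are found and how well each found nucleus is delineated.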


Subject(s)
Cell Nucleus , Neural Networks, Computer , Humans , Eosine Yellowish-(YS) , Hematoxylin , Staining and Labeling , Image Processing, Computer-Assisted
5.
Syst Rev ; 13(1): 74, 2024 Feb 26.
Article in English | MEDLINE | ID: mdl-38409059

ABSTRACT

BACKGROUND: The radial forearm free flap (RFFF) serves as a workhorse for a variety of reconstructions. Although there are various surgical techniques for donor site closure after RFFF raising, the most common are closure using a split-thickness skin graft (STSG) or a full-thickness skin graft (FTSG). The closure can result in wound complications and functional and aesthetic compromise of the forearm and hand. The aim of the planned systematic review and meta-analysis is to compare the wound-related, function-related and aesthetics-related outcomes associated with full-thickness skin grafts (FTSG) and split-thickness skin grafts (STSG) in radial forearm free flap (RFFF) donor site closure. METHODS: A systematic review and meta-analysis will be conducted. The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines will be followed. Electronic databases and platforms (PubMed, Embase, Scopus, Web of Science, Cochrane Central Register of Controlled Trials (CENTRAL), China National Knowledge Infrastructure (CNKI)) and clinical trial registries (ClinicalTrials.gov, the German Clinical Trials Register, the ISRCTN registry, the International Clinical Trials Registry Platform) will be searched using predefined search terms until 15 January 2024. A rerun of the search will be carried out within 12 months before publication of the review. Eligible studies should report on the occurrence of donor site complications after raising an RFFF and closure of the defect. Included closure techniques are those that use full-thickness or split-thickness skin grafts; primary wound closure without the use of a skin graft is excluded. Outcomes are considered wound-, function-, and aesthetics-related. Studies that will be included are randomized controlled trials (RCTs) and prospective and retrospective comparative cohort studies. Case-control studies, studies without a control group, animal studies and cadaveric studies will be excluded. Screening will be performed in a blinded fashion by two reviewers per study, with a third reviewer resolving discrepancies. The risk of bias in the original studies will be assessed using the ROBINS-I and RoB 2 tools. Data synthesis will be done using Review Manager (RevMan) 5.4.1. If appropriate, a meta-analysis will be conducted. Between-study variability will be assessed using the I² index. If necessary, R will be used. The quality of evidence for outcomes will be assessed using the Grading of Recommendations Assessment, Development and Evaluation (GRADE) approach. DISCUSSION: This study's findings may help us understand the complication rates of both closure techniques and may have important implications for developing future guidelines for RFFF donor site management. If the available data are limited and several questions remain unanswered, additional comparative studies will be needed. SYSTEMATIC REVIEW REGISTRATION: The protocol was developed in line with the PRISMA-P extension for protocols and was registered with the International Prospective Register of Systematic Reviews (PROSPERO) on 17 September 2023 (registration number CRD42023351903).
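The I² index named in the protocol has a simple closed form derived from Cochran's Q statistic; a minimal sketch (the function name is an assumption, and Q itself would come from the meta-analysis software):

```python
def i_squared(cochran_q: float, num_studies: int) -> float:
    """I² = max(0, (Q - df) / Q) * 100, with df = k - 1 for k studies.
    It estimates the percentage of total variation across studies that is
    due to heterogeneity rather than chance."""
    df = num_studies - 1
    if cochran_q <= 0:
        return 0.0
    return max(0.0, (cochran_q - df) / cochran_q) * 100.0
```

For example, Q = 20 across 5 studies gives I² = 80%, conventionally read as considerable heterogeneity.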


Subject(s)
Free Tissue Flaps , Skin Transplantation , Humans , Skin Transplantation/methods , Forearm/surgery , Systematic Reviews as Topic , Meta-Analysis as Topic
6.
Diagnostics (Basel) ; 14(3)2024 Jan 27.
Article in English | MEDLINE | ID: mdl-38337796

ABSTRACT

PURPOSE: To assess the diagnostic accuracy of BMI-adapted, low-radiation and low-iodine dose, dual-source aortic CT for endoleak detection in non-obese and obese patients following endovascular aortic repair. METHODS: In this prospective single-center study, patients referred for follow-up CT after endovascular repair with a history of at least one standard triphasic (native, arterial and delayed phase) routine CT protocol were enrolled. Patients were divided into two groups and allocated to a BMI-adapted (group A, BMI < 30 kg/m2; group B, BMI ≥ 30 kg/m2) double low-dose CT (DLCT) protocol comprising single-energy arterial and dual-energy delayed phase series with virtual non-contrast (VNC) reconstructions. An in-patient comparison of the DLCT and routine CT protocol as reference standard was performed regarding differences in diagnostic accuracy, radiation dose, and image quality. RESULTS: Seventy-five patients were included in the study (mean age 73 ± 8 years, 63 (84%) male). Endoleaks were diagnosed in 20 (26.7%) patients, 11 of 53 (20.8%) in group A and 9 of 22 (40.9%) in group B. Two radiologists achieved an overall diagnostic accuracy of 98.7% and 97.3% for endoleak detection, with 100% in group A and 95.5% and 90.9% in group B. All examinations were diagnostic. The DLCT protocol reduced the effective dose from 10.0 ± 3.6 mSv to 6.1 ± 1.5 mSv (p < 0.001) and the total iodine dose from 31.5 g to 14.5 g in group A and to 17.4 g in group B. CONCLUSION: Optimized double low-dose dual-source aortic CT with VNC, arterial and delayed phase images demonstrated high diagnostic accuracy for endoleak detection and significant radiation and iodine dose reductions in both obese and non-obese patients compared to the reference standard of triple phase, standard radiation and iodine dose aortic CT.

7.
Med Image Anal ; 93: 103100, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38340545

ABSTRACT

With the massive proliferation of data-driven algorithms, such as deep learning-based approaches, the availability of high-quality data is of great interest. Volumetric data are very important in medicine, with uses ranging from disease diagnosis to therapy monitoring. When sufficient data are available, models can be trained to help doctors with these tasks. Unfortunately, there are scenarios where large amounts of data are unavailable; for example, rare diseases and privacy issues can restrict data availability. In non-medical fields, the high cost of obtaining enough high-quality data can also be a concern. A solution to these problems can be the generation of realistic synthetic data using Generative Adversarial Networks (GANs). Such generative mechanisms are a particularly valuable asset in healthcare, where data must be of high quality, realistic, and free of privacy issues. Accordingly, most publications on volumetric GANs are within the medical domain. In this review, we provide a summary of works that generate realistic volumetric synthetic data using GANs. We outline GAN-based methods in these areas with their common architectures, loss functions and evaluation metrics, including advantages and disadvantages. We present a novel taxonomy, evaluations, challenges, and research opportunities to provide a holistic overview of the current state of volumetric GANs.


Subject(s)
Algorithms , Data Analysis , Humans , Rare Diseases
8.
Comput Methods Programs Biomed ; 245: 108013, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38262126

ABSTRACT

The recent release of ChatGPT, a chatbot research project/product for natural language processing (NLP) by OpenAI, stirred up a sensation among both the general public and medical professionals, amassing a phenomenally large user base in a short time. This is a typical example of the 'productization' of cutting-edge technologies, which allows the general public without a technical background to gain firsthand experience in artificial intelligence (AI), similar to the AI hype created by AlphaGo (DeepMind Technologies, UK) and self-driving cars (Google, Tesla, etc.). However, it is crucial, especially for healthcare researchers, to remain prudent amidst the hype. This work provides a systematic review of existing publications on the use of ChatGPT in healthcare, elucidating the 'status quo' of ChatGPT in medical applications for general readers, healthcare professionals and NLP scientists. The large biomedical literature database PubMed is used to retrieve published works on this topic using the keyword 'ChatGPT'. An inclusion criterion and a taxonomy are further proposed to filter the search results and categorize the selected publications, respectively. It is found through the review that the current release of ChatGPT has achieved only moderate or 'passing' performance in a variety of tests, and is unreliable for actual clinical deployment, since it is not intended for clinical applications by design. We conclude that specialized NLP models trained on (bio)medical datasets still represent the right direction to pursue for critical clinical applications.


Subject(s)
Artificial Intelligence , Physicians , Humans , Databases, Factual , Natural Language Processing , PubMed
9.
Eur Radiol ; 34(1): 330-337, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37505252

ABSTRACT

OBJECTIVES: To provide physicians and researchers with an efficient way to extract information from weakly structured radiology reports using natural language processing (NLP) machine learning models. METHODS: We evaluated seven different German bidirectional encoder representations from transformers (BERT) models on a dataset of 857,783 unlabeled radiology reports and an annotated reading comprehension dataset in the format of SQuAD 2.0 based on 1223 additional reports. RESULTS: Continued pre-training of a BERT model on the radiology dataset and a medical online encyclopedia resulted in the most accurate model, with an F1 score of 83.97% and an exact match score of 71.63% for answerable questions, and 96.01% accuracy in detecting unanswerable questions. Fine-tuning a non-medical model without further pre-training led to the lowest-performing model. The final model proved stable against variation in the formulation of questions and in dealing with questions on topics excluded from the training set. CONCLUSIONS: General-domain BERT models further pre-trained on radiological data achieve high accuracy in answering questions on radiology reports. We propose to integrate our approach into the workflow of medical practitioners and researchers to extract information from radiology reports. CLINICAL RELEVANCE STATEMENT: By reducing the need for manual searches of radiology reports, radiologists' resources are freed up, which indirectly benefits patients. KEY POINTS: • BERT models pre-trained on general-domain datasets and radiology reports achieve high accuracy (83.97% F1 score) in question answering on radiology reports. • The best-performing model achieves an F1 score of 83.97% for answerable questions and 96.01% accuracy for questions without an answer. • Additional radiology-specific pre-training of all investigated BERT models improves their performance.
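The exact match and F1 scores above follow the standard SQuAD-style evaluation; a simplified sketch of those metrics (the normalization is an approximation of the official evaluation script, and the example answers are invented):

```python
import re
import string
from collections import Counter

def normalize(text):
    """Lowercase, strip punctuation and English articles, collapse
    whitespace (an approximation of the official SQuAD normalization)."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in string.punctuation)
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(prediction, gold):
    return normalize(prediction) == normalize(gold)

def token_f1(prediction, gold):
    """Harmonic mean of token-level precision and recall between the
    predicted answer span and the gold answer span."""
    pred_tokens = normalize(prediction).split()
    gold_tokens = normalize(gold).split()
    overlap = sum((Counter(pred_tokens) & Counter(gold_tokens)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)
```

For instance, the prediction "left upper lung" against the gold answer "left lung" is not an exact match but still earns a token F1 of 0.8.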


Subject(s)
Information Storage and Retrieval , Radiology , Humans , Language , Machine Learning , Natural Language Processing
10.
Sci Data ; 10(1): 796, 2023 11 11.
Article in English | MEDLINE | ID: mdl-37951957

ABSTRACT

The availability of computational hardware and developments in (medical) machine learning (MML) increase the clinical usability of medical mixed realities (MMR). Medical instruments have played a vital role in surgery for ages. To further accelerate the implementation of MML and MMR, three-dimensional (3D) datasets of instruments should be publicly available. The proposed data collection consists of 103 3D-scanned medical instruments from the clinical routine, scanned with structured light scanners. The collection includes, for example, retractors, forceps, and clamps. It can be augmented by generating similar models using 3D software, resulting in an enlarged dataset for analysis. The collection can be used for general instrument detection and tracking in operating room settings, for freeform marker-less instrument registration for tool tracking in augmented reality, for medical simulation or training scenarios in virtual reality, and for medical diminishing reality in mixed reality. We hope to ease research in the fields of MMR and MML, but also to motivate the release of a wider variety of needed surgical instrument datasets.


Subject(s)
Imaging, Three-Dimensional , Surgical Instruments , Virtual Reality , Computer Simulation , Software
11.
BMC Med Imaging ; 23(1): 174, 2023 10 31.
Article in English | MEDLINE | ID: mdl-37907876

ABSTRACT

BACKGROUND: With the rise in importance of personalized medicine and deep learning, we combine the two to create personalized neural networks. The aim of the study is to provide a proof of concept that data from just one patient can be used to train deep neural networks to detect tumor progression in longitudinal datasets. METHODS: Two datasets with 64 scans from 32 patients with glioblastoma multiforme (GBM) were evaluated in this study. The contrast-enhanced T1w sequences of brain magnetic resonance imaging (MRI) images were used. We trained a neural network for each patient using just two scans from different timepoints to map the difference between the images. The change in tumor volume can be calculated with this map. The neural networks were a form of Wasserstein-GAN (generative adversarial network), an unsupervised learning architecture. The combination of data augmentation and the network architecture allowed us to skip the co-registration of the images. Furthermore, no additional training data, pre-training of the networks or any (manual) annotations are necessary. RESULTS: The model achieved an AUC score of 0.87 for tumor change. We also introduced modified RANO criteria, for which an accuracy of 66% can be achieved. CONCLUSIONS: We present a novel deep learning approach that uses data from just one patient to train deep neural networks to monitor tumor change. Evaluating the results on two different datasets shows the potential of the method to generalize.
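The reported AUC has a useful probabilistic reading: it is the probability that a randomly chosen progressing case receives a higher change score than a randomly chosen stable case. A minimal sketch of that rank-based computation (names and scores are illustrative, not from the study):

```python
def auc(pos_scores, neg_scores):
    """AUC as the probability that a positive case scores higher than a
    negative one; ties count half (the Mann-Whitney U formulation)."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))
```

An AUC of 0.87 therefore means the model ranks a progressing scan pair above a stable one 87% of the time.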


Subject(s)
Glioblastoma , Neural Networks, Computer , Humans , Magnetic Resonance Imaging , Brain , Glioblastoma/diagnostic imaging , Image Processing, Computer-Assisted/methods
12.
Sci Rep ; 13(1): 20229, 2023 11 19.
Article in English | MEDLINE | ID: mdl-37981641

ABSTRACT

Traditional convolutional neural network (CNN) methods rely on dense tensors, which makes them suboptimal for spatially sparse data. In this paper, we propose a CNN model based on sparse tensors for efficient processing of high-resolution shapes represented as binary voxel occupancy grids. In contrast to a dense CNN that takes the entire voxel grid as input, a sparse CNN processes only the non-empty voxels, thus reducing the memory and computation overhead caused by the sparse input data. We evaluate our method on two clinically relevant skull reconstruction tasks: (1) given a defective skull, reconstruct the complete skull (i.e., skull shape completion), and (2) given a coarse skull, reconstruct a high-resolution skull with fine geometric details (shape super-resolution). Our method outperforms its dense CNN-based counterparts on the skull reconstruction tasks quantitatively and qualitatively, while requiring substantially less memory for training and inference. We observed that, on the 3D skull data, the overall memory consumption of the sparse CNN grows approximately linearly during inference with respect to the image resolution. During training, memory usage grows clearly more slowly than the image resolution: an [Formula: see text] increase in voxel number leads to a less than [Formula: see text] increase in memory requirements. Our study demonstrates the effectiveness of using a sparse CNN for skull reconstruction tasks, and our findings can be applied to other spatially sparse problems. We support this with additional experimental results on other sparse medical datasets, such as the aorta and the heart. Project page at https://github.com/Jianningli/SparseCNN .
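To make the dense-versus-sparse distinction concrete, here is a toy sketch of the coordinate (COO) representation that sparse convolution libraries build on; real implementations use specialized sparse convolution engines, so this only illustrates why memory scales with the number of occupied voxels rather than with the grid size:

```python
def to_sparse_coo(voxels):
    """Convert a dense 0/1 occupancy grid (nested lists indexed [z][y][x])
    into a list of (z, y, x) coordinates of occupied voxels. A sparse CNN
    stores and convolves only these entries, so a thin surface like a
    skull costs far less than its full bounding grid."""
    return [(z, y, x)
            for z, plane in enumerate(voxels)
            for y, row in enumerate(plane)
            for x, value in enumerate(row) if value]

# A tiny 2x2x2 grid with two occupied voxels out of eight cells:
grid = [[[0, 0], [0, 1]], [[1, 0], [0, 0]]]
coords = to_sparse_coo(grid)
```

Doubling the resolution of a mostly empty grid multiplies the dense cell count eightfold, while the coordinate list grows only with the occupied surface, which mirrors the memory behavior reported above.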


Subject(s)
Image Processing, Computer-Assisted , Neural Networks, Computer , Image Processing, Computer-Assisted/methods , Skull/diagnostic imaging , Head
14.
JCO Clin Cancer Inform ; 7: e2300038, 2023 08.
Article in English | MEDLINE | ID: mdl-37527475

ABSTRACT

PURPOSE: Quantifying treatment response to gastroesophageal junction (GEJ) adenocarcinomas is crucial to providing an optimal therapeutic strategy. Routinely taken tissue samples provide an opportunity to enhance existing positron emission tomography-computed tomography (PET/CT)-based therapy response evaluation. Our objective was to investigate whether deep learning (DL) algorithms are capable of predicting the therapy response of patients with GEJ adenocarcinoma to neoadjuvant chemotherapy on the basis of histologic tissue samples. METHODS: This diagnostic study recruited 67 patients with stage I-III GEJ adenocarcinoma from the multicentric nonrandomized MEMORI trial, including three German university hospitals: TUM (University Hospital Rechts der Isar, Munich), LMU (Hospital of the Ludwig-Maximilians-University, Munich), and UME (University Hospital Essen, Essen). All patients underwent baseline PET/CT scans and esophageal biopsy before and 14-21 days after treatment initiation. Treatment response was defined as a ≥35% decrease in SUVmax from baseline. Several DL algorithms were developed to predict PET/CT-based responders and nonresponders to neoadjuvant chemotherapy using digitized histopathologic whole slide images (WSIs). RESULTS: The resulting models were trained on TUM (n = 25 pretherapy, n = 47 on-therapy) patients and evaluated on our internal validation cohort from LMU and UME (n = 17 pretherapy, n = 15 on-therapy). Compared with multiple architectures, the best pretherapy network achieves an area under the receiver operating characteristic curve (AUROC) of 0.81 (95% CI, 0.61 to 1.00), an area under the precision-recall curve (AUPRC) of 0.82 (95% CI, 0.61 to 1.00), a balanced accuracy of 0.78 (95% CI, 0.60 to 0.94), and a Matthews correlation coefficient (MCC) of 0.55 (95% CI, 0.18 to 0.88).
The best on-therapy network achieves an AUROC of 0.84 (95% CI, 0.64 to 1.00), an AUPRC of 0.82 (95% CI, 0.56 to 1.00), a balanced accuracy of 0.80 (95% CI, 0.65 to 1.00), and an MCC of 0.71 (95% CI, 0.38 to 1.00). CONCLUSION: Our results show that DL algorithms can predict treatment response to neoadjuvant chemotherapy using WSIs with high accuracy even before therapy initiation, suggesting the presence of predictive morphologic tissue biomarkers.
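Since the Matthews correlation coefficient (MCC) is one of the headline metrics here, a quick sketch of how it is computed from a binary confusion matrix:

```python
import math

def mcc(tp, tn, fp, fn):
    """Matthews correlation coefficient: ranges from -1 (total
    disagreement) through 0 (chance-level) to +1 (perfect prediction),
    and remains informative on imbalanced responder/nonresponder splits."""
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    if denom == 0:
        return 0.0
    return (tp * tn - fp * fn) / denom
```

Unlike plain accuracy, an MCC of 0.71 cannot be achieved by simply predicting the majority class.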


Subject(s)
Adenocarcinoma , Deep Learning , Humans , Neoadjuvant Therapy , Positron Emission Tomography Computed Tomography , Adenocarcinoma/pathology , Esophagogastric Junction/pathology
15.
Comput Biol Med ; 165: 107365, 2023 10.
Article in English | MEDLINE | ID: mdl-37647783

ABSTRACT

Surveillance imaging of patients with chronic aortic diseases, such as aneurysms and dissections, relies on obtaining and comparing cross-sectional diameter measurements along the aorta at predefined aortic landmarks over time. The orientation of the cross-sectional measuring planes at each landmark is currently defined manually by highly trained operators. Centerline-based approaches are unreliable in patients with chronic aortic dissection because of the asymmetric flow channels, differences in contrast opacification, and presence of mural thrombus, making centerline computations or measurements difficult to generate and reproduce. In this work, we present three alternative approaches - INS, MCDS, MCDbS - based on convolutional neural networks and uncertainty quantification methods to predict the orientation (ϕ,θ) of such cross-sectional planes. For the monitoring of chronic aortic dissections, we show how a dataset of 162 CTA volumes with a total of 3273 imperfect manual annotations routinely collected in a clinic can be efficiently used to accomplish this task, despite the presence of non-negligible interoperator variabilities in terms of mean absolute error (MAE) and 95% limits of agreement (LOA). We show how, despite the large limits of agreement in the training data, the trained model provides faster and more reproducible results than either an expert user or a centerline method. The remaining disagreement lies within the variability produced by three independent expert annotators and matches the current state of the art, providing a similar error, but in a fraction of the time.
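The 95% limits of agreement used above to quantify interoperator variability follow the standard Bland-Altman construction; a minimal sketch, assuming approximately normally distributed paired differences (the function name and inputs are illustrative):

```python
import math

def limits_of_agreement(measurements_a, measurements_b):
    """Bland-Altman 95% limits of agreement for two raters' paired
    measurements: mean difference ± 1.96 × SD of the differences."""
    diffs = [a - b for a, b in zip(measurements_a, measurements_b)]
    mean = sum(diffs) / len(diffs)
    sd = math.sqrt(sum((d - mean) ** 2 for d in diffs) / (len(diffs) - 1))
    return mean - 1.96 * sd, mean + 1.96 * sd
```

Roughly 95% of new paired differences are expected to fall inside this interval, which is why wide LOA in the annotations signal the interoperator variability discussed above.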


Subject(s)
Aortic Dissection , Computed Tomography Angiography , Humans , Retrospective Studies , Uncertainty , Aorta , Aortic Dissection/diagnostic imaging
16.
Med Image Anal ; 88: 102865, 2023 08.
Article in English | MEDLINE | ID: mdl-37331241

ABSTRACT

Cranial implants are commonly used for surgical repair of craniectomy-induced skull defects. These implants are usually generated offline and may require days to weeks to be available. An automated implant design process combined with onsite manufacturing facilities can guarantee immediate implant availability and avoid secondary intervention. To address this need, the AutoImplant II challenge was organized in conjunction with MICCAI 2021, catering for the unmet clinical and computational requirements of automatic cranial implant design. The first edition of AutoImplant (AutoImplant I, 2020) demonstrated the general capabilities and effectiveness of data-driven approaches, including deep learning, for a skull shape completion task on synthetic defects. The second AutoImplant challenge (i.e., AutoImplant II, 2021) built upon the first by adding real clinical craniectomy cases as well as additional synthetic imaging data. The AutoImplant II challenge consisted of three tracks. Tracks 1 and 3 used skull images with synthetic defects to evaluate the ability of submitted approaches to generate implants that recreate the original skull shape. Track 3 consisted of the data from the first challenge (i.e., 100 cases for training, and 110 for evaluation), and Track 1 provided 570 training and 100 validation cases aimed at evaluating skull shape completion algorithms at diverse defect patterns. Track 2 also made progress over the first challenge by providing 11 clinically defective skulls and evaluating the submitted implant designs on these clinical cases. The submitted designs were evaluated quantitatively against imaging data from post-craniectomy as well as by an experienced neurosurgeon. Submissions to these challenge tasks made substantial progress in addressing issues such as generalizability, computational efficiency, data augmentation, and implant refinement. 
This paper serves as a comprehensive summary and comparison of the submissions to the AutoImplant II challenge. Codes and models are available at https://github.com/Jianningli/Autoimplant_II.


Subject(s)
Prostheses and Implants , Skull , Humans , Skull/diagnostic imaging , Skull/surgery , Craniotomy/methods , Head
17.
Comput Med Imaging Graph ; 107: 102238, 2023 07.
Article in English | MEDLINE | ID: mdl-37207396

ABSTRACT

The segmentation of histopathological whole slide images into tumourous and non-tumourous tissue types is a challenging task that requires the consideration of both local and global spatial contexts to classify tumourous regions precisely. The identification of subtypes of tumour tissue complicates the issue, as the sharpness of separation decreases and the pathologist's reasoning is even more guided by spatial context. However, the identification of detailed tissue types is crucial for providing personalized cancer therapies. Due to the high resolution of whole slide images, existing semantic segmentation methods, restricted to isolated image sections, are incapable of processing context information beyond them. To take a step towards better context comprehension, we propose a patch neighbour attention mechanism that queries the neighbouring tissue context from a patch embedding memory bank and infuses context embeddings into bottleneck hidden feature maps. Our memory attention framework (MAF) mimics a pathologist's annotation procedure: zooming out and considering surrounding tissue context. The framework can be integrated into any encoder-decoder segmentation method. We evaluate the MAF on two public breast cancer and liver cancer data sets and an internal kidney cancer data set using well-known segmentation models (U-Net, DeepLabV3) and demonstrate its superiority over other context-integrating algorithms, achieving a substantial improvement of up to 17% in Dice score. The code is publicly available at https://github.com/tio-ikim/valuing-vicinity.
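The Dice score by which the improvement above is measured compares predicted and annotated tumour regions; a minimal sketch over sets of pixel indices (a segmentation library would operate on mask arrays instead):

```python
def dice(predicted, annotated):
    """Dice score: 2 × |overlap| / (|predicted| + |annotated|), ranging
    from 0 (no overlap) to 1 (perfect overlap). The inputs are sets of
    pixel indices labelled as tumour."""
    if not predicted and not annotated:
        return 1.0  # both empty: trivially perfect agreement
    overlap = len(predicted & annotated)
    return 2 * overlap / (len(predicted) + len(annotated))
```

Because it normalizes by the sizes of both regions, Dice stays meaningful even when tumourous tissue covers only a small fraction of a slide.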


Subject(s)
Kidney Neoplasms , Liver Neoplasms , Humans , Semantics , Algorithms , Image Processing, Computer-Assisted
19.
Eur J Nucl Med Mol Imaging ; 50(7): 2196-2209, 2023 06.
Article in English | MEDLINE | ID: mdl-36859618

ABSTRACT

PURPOSE: The aim of this study was to systematically evaluate the effect of thresholding algorithms used in computer vision for the quantification of prostate-specific membrane antigen positron emission tomography (PET) derived tumor volume (PSMA-TV) in patients with advanced prostate cancer. The results were validated with respect to the prognostication of overall survival in patients with advanced-stage prostate cancer. MATERIALS AND METHODS: A total of 78 patients who underwent [177Lu]Lu-PSMA-617 radionuclide therapy from January 2018 to December 2020 were retrospectively included in this study. [68Ga]Ga-PSMA-11 PET images, acquired prior to radionuclide therapy, were used for the analysis of thresholding algorithms. All PET images were first analyzed semi-automatically using a pre-evaluated, proprietary software solution as the baseline method. Subsequently, five histogram-based thresholding methods and two local adaptive thresholding methods that are well established in computer vision were applied to quantify molecular tumor volume. The resulting whole-body molecular tumor volumes were validated with respect to the prognostication of overall patient survival as well as their statistical correlation to the baseline method and their performance on standardized phantom scans. RESULTS: The whole-body PSMA-TVs quantified using the different thresholding methods demonstrate a high positive correlation with the baseline method. We observed the highest correlation with generalized histogram thresholding (GHT) (Pearson r = 0.977, p < 0.001) and Sauvola thresholding (r = 0.974, p < 0.001) and the lowest correlation with the Multiotsu (r = 0.877, p < 0.001) and Yen thresholding methods (r = 0.878, p < 0.001). The median survival time of all patients was 9.87 months (95% CI [9.3 to 10.13]).
Stratification by median whole-body PSMA-TV resulted in a median survival time from 11.8 to 13.5 months for the patient group with lower tumor burden and 6.5 to 6.6 months for the patient group with higher tumor burden. The patient group with lower tumor burden had significantly higher probability of survival (p < 0.00625) in eight out of nine thresholding methods (Fig. 2); those methods were SUVmax50 (p = 0.0038), SUV ≥3 (p = 0.0034), Multiotsu (p = 0.0015), Yen (p = 0.0015), Niblack (p = 0.001), Sauvola (p = 0.0001), Otsu (p = 0.0053), and Li thresholding (p = 0.0053). CONCLUSION: Thresholding methods commonly used in computer vision are promising tools for the semiautomatic quantification of whole-body PSMA-TV in [68Ga]Ga-PSMA-11-PET. The proposed algorithm-driven thresholding strategy is less arbitrary and less prone to biases than thresholding with predefined values, potentially improving the application of whole-body PSMA-TV as an imaging biomarker.
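The histogram-based thresholding step described above can be sketched in plain Python. The study used established computer-vision implementations (e.g., Otsu, Yen, Li, Niblack, Sauvola); the minimal Otsu variant below, along with the input names `values` (flattened SUV voxel values) and `voxel_ml` (voxel volume in milliliters), is an illustrative assumption, not the authors' exact pipeline:

```python
def otsu_threshold(values, bins=64):
    """Return the histogram threshold that maximizes between-class
    variance (Otsu's method), computed over `bins` equal-width bins."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / bins or 1.0
    hist = [0] * bins
    for v in values:
        hist[min(int((v - lo) / width), bins - 1)] += 1
    total = len(values)
    # Sum of bin-center intensities weighted by counts, for class means.
    total_sum = sum((lo + (i + 0.5) * width) * h for i, h in enumerate(hist))
    best_t, best_var, w_bg, sum_bg = lo, -1.0, 0, 0.0
    for i, h in enumerate(hist):
        w_bg += h                      # background voxel count so far
        if w_bg == 0 or w_bg == total:
            continue
        sum_bg += (lo + (i + 0.5) * width) * h
        m_bg = sum_bg / w_bg
        m_fg = (total_sum - sum_bg) / (total - w_bg)
        var_between = w_bg * (total - w_bg) * (m_bg - m_fg) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, lo + (i + 1) * width
    return best_t


def psma_tv(values, voxel_ml, threshold):
    """Whole-body molecular tumor volume (ml): voxels at or above
    the threshold multiplied by the per-voxel volume."""
    return sum(1 for v in values if v >= threshold) * voxel_ml
```

On a bimodal SUV distribution (low-uptake background, high-uptake lesions), the threshold lands between the two modes, so only lesion voxels contribute to the volume; the local adaptive methods (Niblack, Sauvola) differ in that they compute a per-voxel threshold from a neighborhood mean and standard deviation rather than one global cut.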


Subject(s)
Prostatic Neoplasms, Castration-Resistant; Prostatic Neoplasms; Humans; Male; Gallium Radioisotopes; Positron Emission Tomography Computed Tomography/methods; Positron-Emission Tomography; Prostate-Specific Antigen; Prostatic Neoplasms/diagnostic imaging; Prostatic Neoplasms/radiotherapy; Prostatic Neoplasms/pathology; Prostatic Neoplasms, Castration-Resistant/pathology; Retrospective Studies; Tumor Burden
20.
Med Image Anal ; 85: 102757, 2023 04.
Article in English | MEDLINE | ID: mdl-36706637

ABSTRACT

The HoloLens (Microsoft Corp., Redmond, WA), a head-worn, optically see-through augmented reality (AR) display, has been the main driver of the recent surge in medical AR research. In this systematic review, we provide a comprehensive overview of the usage of the first-generation HoloLens within the medical domain, from its release in March 2016 through 2021. We identified 217 relevant publications through a systematic search of the PubMed, Scopus, IEEE Xplore and SpringerLink databases. We propose a new taxonomy covering use case; technical methodology for registration and tracking; data sources; visualization; and validation and evaluation, and analyze the retrieved publications accordingly. We find that the bulk of research focuses on supporting physicians during interventions, where the HoloLens is promising for procedures usually performed without image guidance. However, the consensus is that accuracy and reliability are still too low to replace conventional guidance systems. Medical students are the second most common target group, where AR-enhanced medical simulators emerge as a promising technology. While concerns about human-computer interaction, usability and perception are frequently mentioned, hardly any concepts to overcome these issues have been proposed. Instead, registration and tracking lie at the core of most reviewed publications, yet only a few propose innovative concepts in this direction. Finally, we find that the validation of HoloLens applications suffers from a lack of standardized and rigorous evaluation protocols. We hope that this review can advance medical AR research by identifying gaps in the current literature, paving the way for novel, innovative directions and translation into clinical routine.


Subject(s)
Augmented Reality; Humans; Reproducibility of Results